Subscription full text: 717 articles
Free: 75 articles
Domestic free: 66 articles
Industrial technology: 858 articles
By publication year:
  2024: 1    2023: 10   2022: 15   2021: 20   2020: 16
  2019: 6    2018: 13   2017: 16   2016: 34   2015: 28
  2014: 50   2013: 37   2012: 53   2011: 52   2010: 50
  2009: 50   2008: 53   2007: 59   2006: 41   2005: 33
  2004: 43   2003: 29   2002: 26   2001: 26   2000: 18
  1999: 16   1998: 12   1997: 12   1996: 12   1995: 4
  1994: 5    1993: 4    1992: 1    1991: 1    1989: 2
  1988: 1    1987: 1    1986: 2    1985: 1    1984: 2
  1983: 1    1977: 1    1964: 1
A total of 858 results were found (search time: 15 ms).
1.
In recent years, the light field (LF) has attracted wide interest as a new imaging modality. The large data volume of LF images poses a great challenge to LF image coding, and the LF images captured by different devices show significant differences in the angular domain. In this paper we propose a view prediction framework to handle LF image coding with various sampling densities. All LF images are represented as view arrays. We first partition the views into a reference view (RV) set and an intermediate view (IV) set. The RVs are rearranged into a pseudo-sequence and directly compressed by a video encoder. The other views are then predicted from the RVs. To exploit the four-dimensional signal structure, we propose the linear approximation prior (LAP) to reveal the correlation among LF views and efficiently remove LF data redundancy. Based on the LAP, a distortion minimization interpolation (DMI) method is used to predict the IVs. To robustly handle LF images with different sampling densities, we propose an iteratively updating depth-image-based rendering (IU-DIBR) method to extend the DMI: auxiliary views are generated to cover the target region, and the DMI then calculates reconstruction coefficients for the IVs. Different view partition patterns are also explored. Extensive experiments on different types of LF images validate the effectiveness of the proposed method.
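The abstract above only names the LAP/DMI components. As a rough illustration of the underlying idea (an intermediate view approximated by a linear combination of reference views, with coefficients chosen to minimize reconstruction error), a minimal sketch follows. The array shapes, the training mask and the plain least-squares fit are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: predict an intermediate light-field view as a
# linear combination of reference views, with coefficients chosen by least
# squares (a stand-in for the distortion-minimizing interpolation step).
import numpy as np

def predict_intermediate_view(ref_views, target_view, train_mask):
    """ref_views: list of HxW reference views; target_view: HxW view to predict;
    train_mask: boolean HxW mask of pixels used to fit the coefficients."""
    # Each reference view becomes one column of the design matrix A.
    A = np.stack([v[train_mask] for v in ref_views], axis=1).astype(np.float64)
    b = target_view[train_mask].astype(np.float64)
    # Coefficients that minimize the reconstruction error in the least-squares sense.
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Apply the same coefficients to the full views to synthesize the prediction.
    full = np.stack([v.ravel() for v in ref_views], axis=1).astype(np.float64)
    pred = (full @ coeffs).reshape(target_view.shape)
    return pred, coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    refs = [rng.random((32, 32)) for _ in range(4)]
    target = 0.3 * refs[0] + 0.7 * refs[2]            # synthetic "intermediate" view
    mask = np.zeros((32, 32), dtype=bool)
    mask[::2, ::2] = True                             # fit on a sparse pixel subset
    pred, w = predict_intermediate_view(refs, target, mask)
    print("coefficients:", np.round(w, 3), "MSE:", float(np.mean((pred - target) ** 2)))
```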
2.
This paper presents the design of a high-accuracy gravimetric (weighing-method) liquid flow standard facility, which makes it possible to provide value traceability for the high-accuracy Coriolis mass flowmeters now appearing on the market. It describes the measures adopted during the design to improve the facility's accuracy, including a motor-driven flow diverter, self-calibration of the weighing system, and the use of distilled water instead of tap water. Tests show that these measures enable the facility to reach the expected uncertainty level. The paper also focuses on the key points in evaluating the uncertainty of a high-accuracy static gravimetric liquid flow standard facility and assesses the facility's uncertainty level.
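As a generic illustration of the uncertainty evaluation mentioned above, the sketch below combines relative standard uncertainty components of a static gravimetric flow standard by root-sum-square and reports an expanded uncertainty with k = 2, following common GUM practice. The component names and numbers are invented placeholders, not values from the paper.

```python
# Minimal GUM-style root-sum-square combination for a static gravimetric
# (weighing) flow standard. All component values below are illustrative.
import math

components = {
    "balance_calibration": 0.005,   # relative standard uncertainty, %
    "diverter_timing":     0.008,
    "buoyancy_correction": 0.003,
    "water_density":       0.002,
    "evaporation_leakage": 0.004,
}

u_c = math.sqrt(sum(u ** 2 for u in components.values()))  # combined, %
U = 2.0 * u_c                                              # expanded, k = 2
print(f"combined relative standard uncertainty: {u_c:.4f} %")
print(f"expanded uncertainty (k=2): {U:.4f} %")
```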
3.
袁承宗, 李贤, 张震, 叶盛, 蔡非凡. 《电视技术》, 2015, 39(11): 122-125
Because existing industrial cameras cannot meet space requirements such as a wide temperature range and a strong-vibration environment, an integrated high-definition camera based on Ethernet video transmission and adapted to the space environment is proposed. The camera implements image coding algorithms including H.264, JPEG2000 and MPEG-4, and supports transmission and control over a space Ethernet protocol. To meet the requirements of the aerospace environment, reliability design and simulation verification were carried out covering structural, mechanical, thermal and electromagnetic-compatibility design, satisfying the requirements of low mass, good temperature adaptability and high reliability. Qualification tests verify that the designed integrated HD camera can work reliably in the harsh space environment.
4.
A stereo vision system can determine the three-dimensional contour of an arbitrary object and the position and depth of any point on its surface. This paper applies stereo vision to human head detection and designs a head recognition system based on depth-image processing. The system uses an Xtion camera to capture a depth image of the scene and analyses its features; the head target region is determined from the characteristics of the depth image and of the head. The Mean Shift algorithm is then applied to cluster the target region, yielding clear image edges. Finally, head segmentation and recognition are performed with a one-dimensional entropy-function segmentation method based on a dynamic threshold. The system locks onto the target region quickly, which reduces the computational load of the algorithm and greatly increases recognition speed. In addition, because the Xtion camera that captures the depth image is mounted directly above the target scene, the occlusion problem commonly encountered in human target detection is largely avoided. Experiments show that the system achieves high recognition accuracy.
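The segmentation step above is only described at a high level. The sketch below uses a Kapur-style one-dimensional maximum-entropy threshold on a synthetic depth image as a stand-in for the dynamic-threshold 1D entropy segmentation; the synthetic scene and the "nearer pixels are the head" rule are assumptions.

```python
# One-dimensional maximum-entropy (Kapur-style) thresholding on a depth image.
import numpy as np

def max_entropy_threshold(img_u8):
    hist = np.bincount(img_u8.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    P = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = P[t], 1.0 - P[t]
        if w0 <= 0 or w1 <= 0:
            continue
        p0 = p[: t + 1] / w0                       # class distributions
        p1 = p[t + 1:] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:                        # maximize total class entropy
            best_h, best_t = h0 + h1, t
    return best_t

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    depth = rng.normal(180, 10, (120, 160))             # far background
    depth[40:80, 60:100] = rng.normal(90, 5, (40, 40))  # nearer "head" region
    depth_u8 = np.clip(depth, 0, 255).astype(np.uint8)
    t = max_entropy_threshold(depth_u8)
    head_mask = depth_u8 < t                             # nearer pixels -> candidate head
    print("entropy threshold:", t, "head pixels:", int(head_mask.sum()))
```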
5.
With the rapid development of stud welding technology, weld studs are extensively used in the automobile industry. A weld stud differs from a regular stud in that it has special external features. This study develops a novel non-contact, fast, and high-precision method based on monocular vision for measuring the position and attitude parameters of a weld stud. Under this method, the measuring principle for the weld stud's position and attitude parameters is derived and an accurate mathematical model is set up. Based on this mathematical model, a precise calibration method for the parameters of the projective-transformation correspondence is developed, and an optimal observation condition is then introduced as a constraint into the measurement process to enhance the location precision. The experimental results show that the proposed method is fast and accurate, and it satisfies the requirement of online, flexible, fast, and high-precision measurement of weld studs' pose parameters in the automobile and other manufacturing industries.
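The paper's exact measurement model is not given in the abstract. As a generic stand-in, the sketch below recovers an object's position and attitude from a single calibrated view with OpenCV's PnP solver; the stud feature points, intrinsics and simulated pose are all assumptions.

```python
# Generic monocular pose recovery: known 3D feature points on the part plus
# their detected image locations give position and attitude via PnP.
import numpy as np
import cv2

# Hypothetical 3D feature points on the weld stud, in its own frame (mm).
object_pts = np.array([[0, 0, 0], [6, 0, 0], [0, 6, 0], [6, 6, 0],
                       [3, 3, 20], [3, 0, 20]], dtype=np.float64)
# Assumed pinhole intrinsics and zero distortion.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)

# Simulate a ground-truth pose and project the points to get "detected" pixels.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([10.0, -5.0, 400.0])
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, dist)

# Recover position and attitude from the single view.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)          # attitude as a rotation matrix
print("recovered position (mm):", tvec.ravel())
print("recovered rotation:\n", np.round(R, 3))
```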
6.
This paper proposes a new method for self-calibrating a set of stationary non-rotating zooming cameras. This is a realistic configuration, usually encountered in surveillance systems, in which each zooming camera is physically attached to a static structure (wall, ceiling, robot, or tripod). In particular, a linear yet effective method to recover the affine structure of the observed scene from two or more such stationary zooming cameras is presented. The proposed method relies solely on point correspondences across images, and no knowledge about the scene is required. Our method exploits the mostly translational displacement of the so-called principal plane of each zooming camera to estimate the location of the plane at infinity. The principal plane of a camera, at any given setting of its zoom, is encoded in its corresponding perspective projection matrix, from which it can be easily extracted. As a displacement of the principal plane under the effect of zooming allows the identification of a pair of parallel planes, each zooming camera can be used to locate a line on the plane at infinity. Hence, two or more such zooming cameras in general positions allow an estimate of the plane at infinity to be obtained, making it possible, under the assumption of zero skew and/or known aspect ratio, to calculate the cameras' parameters linearly. Finally, the parameters of the cameras and the coordinates of the plane at infinity are refined through a nonlinear least-squares optimization procedure. The results of our extensive experiments using both simulated and real data are also reported in this paper.
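A small sketch of the geometric fact the method builds on: the principal plane is the third row of the 3 × 4 projection matrix P = K[R | −RC], so its normal is the viewing direction and is unaffected by the zoom (focal length), while the small translation induced by zooming only shifts its offset, giving a pair of parallel planes. The intrinsics, pose and the size of the zoom-induced shift below are assumptions.

```python
# Extract the principal plane from a projection matrix and show that two zoom
# settings of a (nearly) stationary camera give parallel principal planes.
import numpy as np

def projection_matrix(f, R, C):
    K = np.array([[f, 0, 320], [0, f, 240], [0, 0, 1]], dtype=np.float64)
    t = -R @ C
    return K @ np.hstack([R, t.reshape(3, 1)])

def principal_plane(P):
    plane = P[2]                               # third row: plane [n | d], n.X + d = 0
    return plane / np.linalg.norm(plane[:3])

R = np.eye(3)
C1 = np.array([1.0, 2.0, 0.5])                 # camera centre at zoom setting 1
C2 = C1 + R.T @ np.array([0.0, 0.0, 0.02])     # tiny shift along the optical axis (zoom effect)
pi1 = principal_plane(projection_matrix(400.0, R, C1))
pi2 = principal_plane(projection_matrix(900.0, R, C2))
print("same normal (parallel planes):", np.allclose(pi1[:3], pi2[:3]))
print("plane offsets:", pi1[3], pi2[3])
```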
7.
An efficient disparity estimation algorithm is presented for multi-view video sequences recorded by a two-dimensional camera array in which the cameras are spaced equidistantly. Because of the strong geometrical relationship among views, the disparity vectors of a given view can, for most blocks, be derived from the disparity vectors of other views. A frame constructed using this idea is called a D frame in this work. Three new prediction schemes containing D frames are proposed for encoding 5 × 3 multi-view video sequences. The schemes are applied to several multi-view image sequences taken from a camera array and are compared in terms of quality, bit rate and complexity. The experimental results show that the proposed prediction schemes significantly decrease the complexity of the encoder at a very small cost in quality and/or bit rate.
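As a toy illustration of why a D frame can reuse disparity information: for equidistant, parallel cameras the disparity of a block scales roughly linearly with the camera spacing, so its disparity towards a farther view can be derived from its disparity towards a nearer view. The numbers below are assumptions, and the real scheme operates per block within a full encoder.

```python
# Derive a block's disparity vector to a far view from its disparity to a near
# view, using the linear baseline relation of an equidistant camera array.
import numpy as np

def derive_disparity(d_ref, step_ref, step_target):
    """Scale a block's disparity vector from one camera spacing to another."""
    return d_ref * (step_target / step_ref)

d_0_to_1 = np.array([7.8, 0.4])                  # measured disparity (pixels), view 0 -> view 1
d_0_to_3 = derive_disparity(d_0_to_1, 1, 3)      # derived, view 0 -> view 3 (3x the baseline)
print("derived disparity to view 3:", d_0_to_3)
```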
8.
This paper presents a vision-based software system developed to improve precision measurement in machining technology. Precision measurement, monitoring and control are very important in manufacturing technology, and applying a camera, i.e. vision, is very useful for increasing the accuracy of the measurement system. Automatic control is also vital for improving measurement performance. During measurement of a gear profile, human monitoring can be hazardous, because the stylus of the contact scanning system is very small and thin and the probe moves at speeds of up to 10 mm/s. The existing methods for gear measurement are either time-consuming or expensive. This paper presents a successful implementation of a vision system in precision engineering that saves time and increases the safety of the measurement system while improving measurement performance. A color-based stylus-tracking algorithm is implemented to obtain better performance from the developed system. Stylus-tracking-based measurement is the key issue of the present research.
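The color-based tracking algorithm itself is not detailed in the abstract. The sketch below shows one common way to implement such tracking for a single frame: threshold in HSV around an assumed marker color and take the centroid of the mask. The HSV bounds and the synthetic frame are assumptions.

```python
# Color-based tracking of a stylus tip in one frame: HSV threshold + centroid.
import numpy as np
import cv2

def track_colored_tip(frame_bgr, hsv_low, hsv_high):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_low, hsv_high)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                                   # marker not visible
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) centroid in pixels

if __name__ == "__main__":
    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    cv2.circle(frame, (150, 100), 6, (0, 0, 255), -1)  # synthetic red marker
    pos = track_colored_tip(frame, (0, 120, 120), (10, 255, 255))
    print("tracked tip position:", pos)
```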
9.
This paper proposes a method for self-calibrating a binocular vision system based on a one-dimensional (1D) target. The method only needs a 1D target with two feature points, and the distance between the two points is unknown. During the computation, the distance can be set to an arbitrary value near the actual distance. Using the proposed method, we obtain the parameters of the binocular vision system, including the internal parameters of the two cameras, the external parameters (up to a non-zero scale factor in the translation vector that is tied to the initial distance value we set), the distortion parameters of the cameras, and the three-dimensional coordinates of the two points at different positions. We show theoretically that the initial distance value does not influence the results, and numerical simulations and an experimental example demonstrate the method. Most importantly, the method is insensitive to the initial distance value, which is its biggest advantage. In a practical application, a 1D target with unknown point spacing can be used to calibrate the binocular system conveniently; the method can also be used to calibrate cameras over a large field of view with a small 1D target.
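The scale-factor behaviour described above can be illustrated with a small simulation: if the rig is "calibrated" with a baseline that is wrong by a global scale, triangulating the two target points and comparing their reconstructed spacing with the true spacing recovers that scale. The intrinsics, poses and point positions below are assumptions, not the paper's data.

```python
# Show the global scale ambiguity of a binocular rig and how a known target
# spacing fixes it after triangulation.
import numpy as np
import cv2

K = np.array([[700, 0, 320], [0, 700, 240], [0, 0, 1]], dtype=np.float64)
R = np.eye(3)
t_true = np.array([[100.0], [0.0], [0.0]])                 # true camera-2 centre (mm)

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2_true = K @ np.hstack([R, -R @ t_true])

# Two feature points of the 1D target in space; true spacing = 200 mm.
X = np.array([[0.0, 0.0, 1000.0], [200.0, 0.0, 1000.0]]).T  # 3 x 2

def project(P, Xw):
    x = P @ np.vstack([Xw, np.ones((1, Xw.shape[1]))])
    return x[:2] / x[2]

x1, x2 = project(P1, X), project(P2_true, X)                # observed image points

# Suppose calibration assumed a baseline 1.7x too long (the unknown scale).
P2_wrong = K @ np.hstack([R, -R @ (1.7 * t_true)])
Xh = cv2.triangulatePoints(P1, P2_wrong, x1, x2)
Xr = Xh[:3] / Xh[3]
measured = np.linalg.norm(Xr[:, 0] - Xr[:, 1])              # spacing in the scaled reconstruction
scale = 200.0 / measured                                    # true spacing / reconstructed spacing
print("reconstructed spacing (mm):", round(float(measured), 1))
print("corrected baseline (mm):", (scale * 1.7 * t_true).ravel())  # ~ the true 100 mm baseline
```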
10.
How to make robot vision work robustly under varying lighting conditions and without the constraint of the current color-coded environment are two of the most challenging issues in the RoboCup community. In this paper, we present a robust omnidirectional vision sensor that deals with these issues for RoboCup Middle Size League soccer robots, in which two novel algorithms are applied. The first is a camera-parameter auto-adjusting algorithm based on image entropy. The relationship between image entropy and camera parameters is verified by experiments, and the camera parameters are optimized by maximizing image entropy so that the output of the omnidirectional vision adapts to the varying illumination. The second is a ball recognition method for the omnidirectional vision that does not rely on color classification. We show that the ball on the field is imaged approximately as an ellipse in our omnidirectional vision, so an arbitrary FIFA ball can be recognized by detecting the ellipse it forms in the image. The experimental results show that a robust omnidirectional vision sensor can be realized using the two algorithms above.
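As an illustration of the first algorithm's idea only: image entropy is computed from the grey-level histogram and a camera parameter (here a simulated exposure) is adjusted greedily in the direction that increases it. The toy camera model, parameter range and hill-climbing loop are assumptions, not the authors' implementation.

```python
# Adjust a simulated exposure by hill climbing on image entropy.
import numpy as np

def image_entropy(img_u8):
    hist = np.bincount(img_u8.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def simulated_capture(scene, exposure):
    # Toy camera model: scale the scene radiance by the exposure and clip.
    return np.clip(scene * exposure, 0, 255).astype(np.uint8)

rng = np.random.default_rng(2)
scene = rng.random((120, 160)) * 60.0          # a dim scene (radiance units, assumed)

exposure, step = 1.0, 0.5
for _ in range(40):                            # greedy hill climbing on entropy
    h_now = image_entropy(simulated_capture(scene, exposure))
    h_up = image_entropy(simulated_capture(scene, exposure + step))
    h_down = image_entropy(simulated_capture(scene, max(exposure - step, 0.1)))
    if h_up >= h_now and h_up >= h_down:
        exposure += step
    elif h_down > h_now:
        exposure = max(exposure - step, 0.1)
    else:
        step *= 0.5                            # shrink the step near a local optimum
print("chosen exposure:", round(exposure, 2),
      "entropy:", round(image_entropy(simulated_capture(scene, exposure)), 2))
```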